Fix chemprop init with trainer #181

Open · wants to merge 6 commits into main
Conversation

JenniferHem
Collaborator

Issue:

Currently, a `pl.Trainer` object can be initialized via `from lightning import pytorch as pl`, but also via `import pytorch_lightning as pl`. We use methods to get and set params in Chemprop. Unfortunately, a Trainer object is newly initialized upon calling set or update params. At this point, however, an Accelerator is already instantiated, which leads to an issue, as lightning requires a string here. The "get device" function uses `isinstance` to detect whether a CPU or GPU accelerator was chosen and transforms it back into a string. This `isinstance` check, however, only works for the `lightning` import and fails if `pytorch_lightning` is used.
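
For illustration, a minimal sketch of the failing pattern, assuming a `get_device` helper along the lines described above (the helper name and the exact accelerator classes checked are assumptions, not the actual Chemprop code):

```python
from lightning import pytorch as pl
from lightning.pytorch.accelerators import CPUAccelerator, CUDAAccelerator


def get_device(trainer: pl.Trainer) -> str:
    """Map the trainer's already-instantiated Accelerator back to a string.

    Illustrative sketch of the pattern described above, not the exact
    Chemprop implementation.
    """
    if isinstance(trainer.accelerator, CPUAccelerator):
        return "cpu"
    if isinstance(trainer.accelerator, CUDAAccelerator):
        return "gpu"
    raise ValueError(f"Unsupported accelerator: {trainer.accelerator!r}")


# Works: the accelerator instance comes from the same `lightning` namespace.
trainer = pl.Trainer(accelerator="cpu")
assert get_device(trainer) == "cpu"

# Fails: a Trainer built via `import pytorch_lightning as pl` carries a
# `pytorch_lightning.accelerators.CPUAccelerator`, a different class object,
# so both isinstance checks above return False.
```

Because `lightning.pytorch` and `pytorch_lightning` define separate class objects, `isinstance` across the two namespaces always returns False, even though the classes behave identically.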

Solution:

To avoid adding `pytorch_lightning` as a dependency, we enhanced the validation instead: if a user does not use `lightning` (but used `pytorch_lightning` instead), a corresponding `ValueError` is now raised.
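
A sketch of one way such a check can look, assuming the validation inspects the trainer's module of origin (the function name `_validate_trainer` is hypothetical; the actual Chemprop validation may differ):

```python
def _validate_trainer(trainer) -> None:
    """Reject Trainer objects created from pytorch_lightning instead of lightning.

    Illustrative sketch: classes from `lightning.pytorch` live in modules
    rooted at `lightning`, while `pytorch_lightning` classes are rooted
    at `pytorch_lightning`.
    """
    root_module = type(trainer).__module__.split(".", 1)[0]
    if root_module == "pytorch_lightning":
        raise ValueError(
            "Trainer was created via `import pytorch_lightning`; please create "
            "it via `from lightning import pytorch as pl` instead."
        )
```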
